In this work, we propose a balanced multicomponent and multilayer neural network (MMNN) structure to accurately and efficiently approximate functions with complex features, in terms of both degrees of freedom and computational cost. The main idea is inspired by a multicomponent approach, in which each component can be effectively approximated by a single-layer network, combined with a multilayer decomposition strategy to capture the complexity of the target function. Although MMNNs can be viewed as a simple modification of fully connected neural networks (FCNNs) or multilayer perceptrons (MLPs) that introduces balanced multicomponent structures, they achieve a significant reduction in training parameters, a much more efficient training process, and improved accuracy compared to FCNNs or MLPs. Extensive numerical experiments demonstrate the effectiveness of MMNNs in approximating highly oscillatory functions and their ability to adapt automatically to localized features. Our code and implementations are available on GitHub.
Free, publicly-accessible full text available October 31, 2026.
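The multicomponent idea described above can be sketched in a few lines of NumPy. This is a minimal illustration under stated assumptions: the layer form y = A·relu(W·h + b) + c, the choice of which parameters stay fixed, and all shapes are my own simplifications, not the paper's exact architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_layer(d_in, width, d_out):
    """One multicomponent layer: `width` hidden units shared by `d_out`
    output components (all shapes are illustrative assumptions)."""
    W = rng.standard_normal((width, d_in))
    b = rng.standard_normal(width)
    A = rng.standard_normal((d_out, width)) / np.sqrt(width)
    c = np.zeros(d_out)
    return (W, b, A, c)

def mmnn_forward(x, layers):
    """Forward pass: each layer computes y = A @ relu(W @ h + b) + c.
    In the multicomponent spirit, (W, b) could stay fixed while only the
    component-combining parameters (A, c) are trained."""
    h = x
    for W, b, A, c in layers:
        h = A @ np.maximum(W @ h + b, 0.0) + c
    return h

# A two-layer stack mapping R^1 -> R^1 through 16 intermediate components.
layers = [make_layer(1, 64, 16), make_layer(16, 64, 1)]
y = mmnn_forward(np.array([0.5]), layers)
```

Each layer's output is a balanced combination of simple components, and stacking layers provides the multilayer decomposition of the target function.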
-
In this work, we present a comprehensive study combining mathematical and computational analysis to explain why a two-layer neural network struggles to handle high frequencies in both approximation and learning, especially when machine precision, numerical noise, and computational cost are significant factors in practice. Specifically, we investigate the following fundamental computational issues: (1) the minimal numerical error achievable under finite precision, (2) the computational cost required to attain a given accuracy, and (3) the stability of the method with respect to perturbations. The core of our analysis lies in the conditioning of the representation and its learning dynamics. Explicit answers to these questions are provided, along with supporting numerical evidence.
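The difficulty with high frequencies can be seen in a toy experiment (my own sketch, not the paper's code): fit a low-frequency and a high-frequency sine by least squares over the random ReLU features of a two-layer network with a frozen first layer, and compare the residuals and the conditioning of the feature matrix.

```python
import numpy as np

rng = np.random.default_rng(1)

# Random ReLU features of a two-layer network with frozen first layer.
n, m = 256, 100
x = np.linspace(0.0, 1.0, n)
w = rng.standard_normal(m)
b = rng.uniform(-1.0, 1.0, m)
Phi = np.maximum(np.outer(x, w) + b, 0.0)   # (n, m) feature matrix

def fit_residual(y):
    """Least-squares fit over the random features; relative residual."""
    coef, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return np.linalg.norm(Phi @ coef - y) / np.linalg.norm(y)

res_lo = fit_residual(np.sin(2 * np.pi * x))    # 1 oscillation
res_hi = fit_residual(np.sin(60 * np.pi * x))   # 30 oscillations
cond = np.linalg.cond(Phi)                      # conditioning of the representation
```

The low-frequency residual comes out far smaller than the high-frequency one, while the feature matrix is badly conditioned, which is consistent with the conditioning-based explanation summarized above.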
-
IRCNN: A novel signal decomposition approach based on iterative residue convolutional neural network
Free, publicly-accessible full text available November 1, 2025.
-
The Iterative Filtering method is a technique aimed at decomposing non-stationary and non-linear signals into simple oscillatory components. This method, proposed a decade ago as an alternative to the Empirical Mode Decomposition, has been used extensively in many applied fields of research and has been studied, from a mathematical point of view, in several papers published in the last few years. However, even though its convergence and stability are now established in both the continuous and discrete settings, it remains an open problem to understand to what extent this approach can separate two close-by frequencies contained in a signal. In this paper, we first recall previously established theoretical results about Iterative Filtering. We then prove a few new theorems regarding the ability of this method to separate two nearby frequencies, in the cases of both continuously and discretely sampled signals. Among them, we prove a theorem that allows the construction of filters which capture, up to machine precision, a specific frequency. We run numerical tests to confirm our findings and to compare the performance of Iterative Filtering with that of the Empirical Mode Decomposition and Synchrosqueezing methods. All the results presented confirm the ability of the technique under investigation to address the fundamental “one or two frequencies” question.
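The inner loop of Iterative Filtering can be sketched as follows. This is an illustrative simplification: the actual method derives the filter length and shape from the signal itself, whereas the Hann weights and the fixed iteration count used here are assumptions of the sketch.

```python
import numpy as np

def extract_imf(signal, filter_len, n_iter=10):
    """Extract one oscillatory component by repeatedly subtracting a
    weighted moving average, so that the fast oscillation remains.
    Hann weights and a fixed iteration count are simplifying assumptions."""
    w = np.hanning(filter_len)
    w /= w.sum()
    imf = signal.astype(float).copy()
    for _ in range(n_iter):
        imf = imf - np.convolve(imf, w, mode="same")
    return imf

# Toy signal: slow 2 Hz trend plus fast 40 Hz oscillation.
t = np.arange(512) / 512.0
lo, hi = np.sin(2 * np.pi * 2 * t), np.sin(2 * np.pi * 40 * t)
imf = extract_imf(lo + hi, filter_len=51)
residual = (lo + hi) - imf   # remaining slow component
```

The extracted component tracks the fast oscillation; repeating the procedure on `residual` peels off the next, slower component, which is the decomposition the theorems above analyze.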
